February 1, 2024

Marketers who want to work with accurate information need AI whose biases have been reduced. Read on for the essentials.

Marketing and marketing technology (Martech) revolve around algorithms. Algorithms fuel the AI used for data collection, data analysis, audience segmentation and much more. Marketers trust AI because it appears to deliver objective information. That isn’t always the case.

We tend to think of algorithms as neutral, rule-based systems, and in themselves that is exactly what they are: they have no opinions of their own. Yet every rule carries the implicit assumptions and standards of whoever wrote it. That is one way bias gets into computer programmes. The other, and arguably more important, is through the data the system is trained on.

Facial recognition algorithms, for instance, are typically trained on data sets made up mostly of images of people with lighter skin tones. As a result, they have a poor track record at identifying people of colour. On one occasion, twenty-eight members of Congress, disproportionately people of colour, were incorrectly matched with criminal mugshots. Attempts to fix the problem proved so ineffective that some companies, Microsoft in particular, stopped selling such systems to law enforcement agencies altogether.

Autoregressive language models that use deep learning to generate text are at the heart of ChatGPT, Google’s Bard and other AI-driven chatbots. These models are trained on massive data sets, which can include almost anything published online within a given time period and are therefore rife with inaccuracy, disinformation and bias.

The results are only as good as the data going in:

“If you give it access to the internet, it essentially has whatever prejudice exists,” adds Paul Roetzer, CEO of The Marketing AI Institute. “In many respects, it’s just a reflection of humanity.”

These systems’ designers are aware of this fact.

“In OpenAI’s own disclosures and disclaimers about ChatGPT, negative sentiment is more strongly associated with African American female names than with any other name set in there,” says Christopher Penn, co-founder and chief data scientist of TrustInsights.ai. “So if Letitia gets a lower score than Laura in fully automated, black-box sentiment modelling where the criterion being evaluated is the first name, there is a problem. You are helping to spread these prejudices.”

To paraphrase OpenAI’s own best practices documentation: from hallucinating erroneous information, to offensive outputs, to bias, and much more, language models may not be suitable for every use case without considerable adjustments.
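One way a marketer can spot-check the kind of name bias Penn describes is to score otherwise identical sentences that differ only in the first name. The sketch below assumes an off-the-shelf sentiment model loaded through Hugging Face’s transformers pipeline; the model, names and template sentence are illustrative choices, not anything OpenAI or TrustInsights.ai use:

```python
# Illustrative spot-check: score identical sentences that differ only by first name.
# Assumes the Hugging Face `transformers` package and its default sentiment model;
# the names and template below are examples, not a vetted benchmark.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

template = "{name} submitted the quarterly report on time."
names = ["Laura", "Letitia", "Emily", "Keisha"]

for name in names:
    result = sentiment(template.format(name=name))[0]
    print(f"{name:8s} -> {result['label']} ({result['score']:.3f})")

# If otherwise identical sentences receive systematically different scores,
# the model is reacting to the name itself -- the problem Penn describes.
```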

When faced with this dilemma, what should a marketer do?

Marketers who want to work with quality data must take steps to reduce bias, even though eliminating it entirely will always be an elusive ideal we can work towards but never fully achieve.

According to Christopher Penn, “What marketers and Martech businesses should be asking is: how do we apply this to the training data going in, so that the model has fewer biases to start with that we have to counteract later? If you don’t put rubbish in, you don’t have to filter junk out.”

There are tools that can help with the task of detecting and reducing bias. Here are five of the best known:

  1. Google’s What-If Tool is an open-source programme built for exactly this purpose: if you suspect your model is biased, you can use it to experiment with your data, generate visualisations and check whether your changes meet your criteria.
  2. IBM’s AI Fairness 360 is an open-source set of tools for identifying and removing bias from AI systems.
  3. Microsoft’s Fairlearn is designed to help you navigate the trade-offs between fairness and model performance (a short sketch of its use follows this list).
  4. Local Interpretable Model-Agnostic Explanations (LIME), developed by researcher Marco Tulio Ribeiro, lets users manipulate different components of a model to understand its behaviour and identify the origin of any inherent bias.
  5. FairML, developed by MIT’s Julius Adebayo, is a comprehensive toolbox for auditing predictive models by measuring the relative significance of the model’s inputs.
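To make the list more concrete, here is a minimal sketch of the kind of per-group check Fairlearn supports. It assumes the fairlearn and scikit-learn Python packages, and the toy outcomes, predictions and gender values are invented purely for illustration:

```python
# Minimal Fairlearn sketch: compare a model's behaviour across groups.
# The toy data below is invented for illustration only.
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

y_true = [1, 1, 0, 0, 1, 0, 1, 0]                  # actual outcomes
y_pred = [1, 1, 0, 1, 1, 0, 1, 0]                  # model predictions
gender = ["F", "M", "F", "M", "F", "M", "M", "F"]  # sensitive feature

# Accuracy broken out per group highlights performance gaps.
frame = MetricFrame(metrics=accuracy_score, y_true=y_true,
                    y_pred=y_pred, sensitive_features=gender)
print(frame.by_group)

# Difference in selection rates between groups (0 means parity).
print(demographic_parity_difference(y_true, y_pred, sensitive_features=gender))
```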

These tools are “good,” as Penn puts it, when you know what you’re looking for. They are far less useful when you have no idea what’s inside the box.

It’s simple to judge inputs:

Penn cites AI Fairness 360 as an example. You can feed it a series of loan decisions along with a list of protected classes, and it will identify biases in the training data or in the model itself and sound an alarm when the model begins to drift in a biased direction.
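Here is a minimal sketch of what such a check might look like, assuming the aif360 and pandas Python packages; the tiny loan-decision table and the choice of “race” as the protected attribute are invented purely for illustration:

```python
# Sketch of the check described above: loan decisions plus a protected class.
# The toy data is invented; in practice you would load real decisions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "race":     [1, 1, 0, 0, 1, 0, 1, 0],         # protected attribute (1 = privileged)
    "income":   [60, 85, 52, 47, 90, 38, 75, 44],
    "approved": [1, 1, 0, 0, 1, 0, 1, 1],         # loan decision (1 = approved)
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["race"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"race": 1}],
    unprivileged_groups=[{"race": 0}],
)

# Disparate impact near 1.0 and a parity difference near 0 suggest balance;
# rerunning this as new decisions arrive catches the drift described above.
print("Disparate impact:        ", metric.disparate_impact())
print("Statistical parity diff.:", metric.statistical_parity_difference())
```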

It’s a different story for copy or imagery. “When you’re doing generation it’s a lot harder to achieve that,” Penn explains. “The existing technologies are primarily intended for tabular, rectangular data with clear effects that you’re seeking to protect against.”

Generative tools like ChatGPT and Bard already demand enormous amounts of computing power. Adding further protections against bias would have a notable impact on their performance, making an already challenging task even harder, so don’t hold your breath for a quick fix.

No time to wait:

Given the damage bias can do to their brands, marketers can’t afford to wait for the models to fix themselves. They need to take preventative measures with AI-generated material, and asking “what if” is a crucial part of that. Asking someone involved with diversity, equity and inclusion is a great place to start.

Penn argues that this is where diversity, equity and inclusion (DEI) initiatives, which many organisations only pay lip service to, can truly prove their worth: have the diversity team … check the models’ outputs and give a verdict, “this is not OK” or “this is fine,” and then embed that sign-off in the company’s procedures, so that DEI has formally approved it.

How a company defines bias, and how it deals with it across all these systems, is a key indicator of its culture.

According to Paul Roetzer, “each organisation is going to have to define its own ideas about how it develops and uses this technology. And beyond that subjective level of saying, ‘this is what we believe bias to be and we will, or will not, employ tools that allow this to happen,’ I don’t see how else it gets fixed.”
